The most useful data mining primitives are distance measures. With an effective distance measure, it is possible to perform classification, clustering, anomaly detection, segmentation, etc. For single-event time series, Euclidean distance and Dynamic Time Warping (DTW) distance are known to be extremely effective. However, for time series containing cyclical behaviors, the semantic meaningfulness of such comparisons is less clear. For example, on two separate days the telemetry from an athlete's workout routine might be very similar. The second day may swap the order of the push-ups and squats, add repetitions of pull-ups, or completely omit the dumbbell curls. Any of these minor changes would defeat existing time series distance measures. Some bag-of-features methods have been proposed to address this problem, but we argue that in many cases, similarity is intimately tied to the shapes of subsequences within these longer time series. In such cases, summative features will lack discrimination ability. In this work we introduce PRCIS, which stands for Pattern Representation Comparison in Series. PRCIS is a distance measure for long time series, which exploits recent progress in our ability to summarize time series with dictionaries. We will demonstrate the utility of our ideas on diverse tasks and datasets.
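The reordering problem can be illustrated with a toy example. The motifs, window size, and min-distance profile below are illustrative assumptions, not the actual PRCIS dictionary construction; the sketch only shows why point-wise distances fail when the same activities occur in a different order, while a dictionary-style summary does not.

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

# Two hypothetical "workout" days built from the same motifs, reordered
pushups = np.sin(np.linspace(0, 4 * np.pi, 50))          # motif A
squats  = np.abs(np.sin(np.linspace(0, 2 * np.pi, 50)))  # motif B

day1 = np.concatenate([pushups, squats])
day2 = np.concatenate([squats, pushups])  # same activities, swapped order

# Point-wise comparison is defeated by the reordering...
pointwise = euclidean(day1, day2)

# ...while a crude dictionary-style summary (each series described by its
# best-matching distance to each motif) sees the two days as identical.
def motif_profile(series, motifs, w=50):
    prof = []
    for m in motifs:
        dists = [euclidean(series[i:i + w], m)
                 for i in range(0, len(series) - w + 1, w)]
        prof.append(min(dists))
    return np.array(prof)

p1 = motif_profile(day1, [pushups, squats])
p2 = motif_profile(day2, [pushups, squats])
summary_dist = euclidean(p1, p2)
```

Here the dictionary summary is deliberately crude (non-overlapping windows, exact motif matches); the point is only that shape-based summaries survive reordering that defeats point-wise measures.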
Most existing distillation methods ignore the flexible role of the temperature in the loss function and fix it as a hyper-parameter that can be decided by an inefficient grid search. In general, the temperature controls the discrepancy between two distributions and can faithfully determine the difficulty level of the distillation task. Keeping a constant temperature, i.e., a fixed level of task difficulty, is usually sub-optimal for a growing student during its progressive learning stages. In this paper, we propose a simple curriculum-based technique, termed Curriculum Temperature for Knowledge Distillation (CTKD), which controls the task difficulty level during the student's learning career through a dynamic and learnable temperature. Specifically, following an easy-to-hard curriculum, we gradually increase the distillation loss w.r.t. the temperature, leading to increased distillation difficulty in an adversarial manner. As an easy-to-use plug-in technique, CTKD can be seamlessly integrated into existing knowledge distillation frameworks and brings general improvements at a negligible additional computation cost. Extensive experiments on CIFAR-100, ImageNet-2012, and MS-COCO demonstrate the effectiveness of our method. Our code is available at https://github.com/zhengli97/CTKD.
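A minimal numeric sketch of why the temperature sets the task difficulty: higher temperatures soften both distributions and shrink the discrepancy the student must close. The logits below are made up, and CTKD's learnable temperature with its adversarial (gradient-reversal) update is not reproduced here.

```python
import numpy as np

def softmax(logits, T):
    """Temperature-scaled softmax."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                      # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q):
    """KL divergence, the usual distillation discrepancy."""
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.2]   # hypothetical teacher logits
student = [2.0, 1.5, 0.5]   # hypothetical student logits

# A high temperature flattens both distributions: an *easy* matching task.
easy = kl(softmax(teacher, T=8.0), softmax(student, T=8.0))

# A low temperature sharpens them: a *hard* matching task.
hard = kl(softmax(teacher, T=1.0), softmax(student, T=1.0))
```

CTKD's curriculum amounts to moving the effective difficulty from the `easy` regime toward the `hard` one as the student matures, rather than fixing a single T by grid search.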
Video, as a key driver in the global explosion of digital information, can create tremendous benefits for human society. Governments and enterprises are deploying innumerable cameras for a variety of applications, e.g., law enforcement, emergency management, traffic control, and security surveillance, all facilitated by video analytics (VA). This trend is spurred by the rapid advancement of deep learning (DL), which enables more precise models for object classification, detection, and tracking. Meanwhile, with the proliferation of Internet-connected devices, massive amounts of data are generated daily, overwhelming the cloud. Edge computing, an emerging paradigm that moves workloads and services from the network core to the network edge, has been widely recognized as a promising solution. The resulting new intersection, edge video analytics (EVA), begins to attract widespread attention. Nevertheless, only a few loosely-related surveys exist on this topic. A dedicated venue for collecting and summarizing the latest advances of EVA is highly desired by the community. Besides, the basic concepts of EVA (e.g., definition, architectures, etc.) are ambiguous and neglected by these surveys due to the rapid development of this domain. A thorough clarification is needed to facilitate a consensus on these concepts. To fill in these gaps, we conduct a comprehensive survey of the recent efforts on EVA. In this paper, we first review the fundamentals of edge computing, followed by an overview of VA. The EVA system and its enabling techniques are discussed next. In addition, we introduce prevalent frameworks and datasets to aid future researchers in the development of EVA systems. Finally, we discuss existing challenges and foresee future research directions. We believe this survey will help readers comprehend the relationship between VA and edge computing, and spark new ideas on EVA.
Speech representation learning has improved both speech understanding and speech synthesis tasks for a single language. However, its ability in cross-lingual scenarios has not been explored. In this paper, we extend the pretraining method to cross-lingual multi-speaker speech synthesis tasks, including cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing. We propose a speech-text joint pretraining framework, in which we randomly mask the spectrogram and the phonemes given a speech example and its transcription. By learning to reconstruct the masked parts of the input in different languages, our model shows great improvements over speaker-embedding-based multi-speaker TTS methods. Moreover, our framework is end-to-end for both training and inference, without any finetuning effort. Our experiments on cross-lingual multi-speaker voice cloning and cross-lingual multi-speaker speech editing confirm these improvements. The code and model are publicly available at PaddleSpeech.
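The joint masking step can be sketched as follows. The mask ratio, zero-filling of masked frames, and `<MASK>` token are illustrative assumptions; the paper's actual masking strategy (span lengths, mask embeddings, etc.) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_inputs(spectrogram, phonemes, mask_ratio=0.3):
    """Randomly mask spectrogram frames and phoneme tokens, returning the
    corrupted inputs plus the boolean masks used as reconstruction targets."""
    n_frames = spectrogram.shape[0]
    spec_mask = rng.random(n_frames) < mask_ratio
    phon_mask = rng.random(len(phonemes)) < mask_ratio

    spec_in = spectrogram.copy()
    spec_in[spec_mask] = 0.0                      # zero out masked frames
    phon_in = ["<MASK>" if m else p for p, m in zip(phonemes, phon_mask)]
    return spec_in, phon_in, spec_mask, phon_mask

spec = rng.standard_normal((100, 80))             # 100 frames x 80 mel bins
phones = ["h", "e", "l", "o"] * 5                 # toy transcription
spec_in, phon_in, sm, pm = mask_inputs(spec, phones)
```

During pretraining, the model would be asked to reconstruct `spec[sm]` and the phonemes at `pm` from the surviving context in both modalities.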
Recovering high-quality surfaces from noisy point clouds, known as point cloud denoising, is a fundamental yet challenging problem in geometry processing. Most existing methods either directly denoise the noisy input, or filter raw normals followed by updating point positions. Motivated by the essential interplay between point cloud denoising and normal filtering, we revisit point cloud denoising from a multi-task perspective, and propose an end-to-end network, named PCDNF, to denoise point clouds via joint normal filtering. In particular, we introduce an auxiliary normal filtering task to help the overall network remove noise more effectively while preserving geometric features more accurately. In addition to the overall architecture, our network has two novel modules. On the one hand, to improve noise removal performance, we design a shape-aware selector to construct the latent tangent space representation of a specific point by comprehensively considering the learned point features, normal features, and geometry priors. On the other hand, point features are more suitable for describing geometric details, while normal features are more conducive to representing geometric structures (e.g., edges and corners). Combining point and normal features allows us to overcome their respective weaknesses. Thus, we design a feature refinement module to fuse point and normal features for better recovering geometric information. Extensive evaluations, comparisons, and ablation studies demonstrate that the proposed method outperforms state-of-the-art methods in both point cloud denoising and normal filtering.
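As a loose intuition for the denoising/normal interplay the abstract describes, the classical baseline is to move each point along its normal onto the plane of its neighborhood. This is a textbook projection step, not the PCDNF network; the neighborhood size and iteration count below are arbitrary.

```python
import numpy as np

def denoise_points(points, normals, k=8, iters=3):
    """Classical normal-guided smoothing: move each point along its normal
    toward the centroid plane of its k nearest neighbors."""
    pts = points.copy()
    for _ in range(iters):
        new = pts.copy()
        for i, p in enumerate(pts):
            dists = np.linalg.norm(pts - p, axis=1)
            nbrs = pts[np.argsort(dists)[1:k + 1]]   # exclude the point itself
            n = normals[i] / np.linalg.norm(normals[i])
            # project the offset to the neighborhood centroid onto the normal,
            # so tangential structure is preserved and only noise is removed
            new[i] = p + np.dot(nbrs.mean(axis=0) - p, n) * n
        pts = new
    return pts

# Noisy samples of the plane z = 0, with ground-truth normals (0, 0, 1)
rng = np.random.default_rng(1)
noisy = np.column_stack([rng.uniform(-1, 1, (60, 2)),
                         0.05 * rng.standard_normal(60)])
normals = np.tile([0.0, 0.0, 1.0], (60, 1))
cleaned = denoise_points(noisy, normals)
```

The quality of this step depends entirely on the normals, which is exactly why the paper argues for filtering normals jointly rather than taking raw ones as given.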
Fully supervised log anomaly detection methods require a large amount of labeled data to achieve promising performance. Thus, how to alleviate the heavy burden of annotating massive unlabeled log data has received much attention. Recently, many semi-supervised log anomaly detection methods have been proposed to reduce annotation costs with the help of templates parsed from labeled normal data. However, these methods usually consider each keyword independently, which disregards the correlations between keywords within log events and the contextual relationships among log sequences. In this paper, we propose a novel weakly supervised log anomaly detection framework, named LogLG, to explore the semantic connections among keywords from sequences. Specifically, we design an iterative process, where the keywords of unlabeled logs are first extracted to construct a log-event graph in each iteration. Then, we build a subgraph annotator to turn the task of generating pseudo labels for unlabeled log sequences into annotating the corresponding log-subgraphs. To improve the annotation quality, we adopt a self-supervised task to pre-train the subgraph annotator. After that, a log anomaly detection model is trained with the pseudo labels generated by the subgraph annotator. Conditioned on the classification results, we re-extract keywords from the classified log sequences and update the log-event graph for the next iteration. Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies on unlabeled log data, and demonstrate that LogLG, as a state-of-the-art weakly supervised method, achieves significant improvements compared to existing semi-supervised methods.
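The first step of each iteration, building a keyword graph from unlabeled log events, can be sketched as below. The whitespace tokenization and co-occurrence edge weighting are illustrative assumptions; the paper's actual keyword extraction and graph construction may differ.

```python
from collections import Counter
from itertools import combinations

def build_log_event_graph(log_sequences):
    """Build an undirected keyword graph: nodes are keywords, and edge
    weights count how often two keywords co-occur in the same log event."""
    edges = Counter()
    nodes = set()
    for seq in log_sequences:            # a sequence is a list of log events
        for event in seq:
            keywords = sorted(set(event.lower().split()))
            nodes.update(keywords)
            for a, b in combinations(keywords, 2):
                edges[(a, b)] += 1       # keys are sorted, so (a, b) is canonical
    return nodes, edges

logs = [
    ["connection closed by peer", "connection reset by peer"],
    ["disk quota exceeded"],
]
nodes, edges = build_log_event_graph(logs)
```

Keywords that repeatedly appear together (here, "connection" and "peer") end up strongly connected, which is the kind of semantic link that treating each keyword independently would miss.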
Recently, speech representation learning has improved many speech-related tasks such as speech recognition, speech classification, and speech-to-text translation. However, all the above tasks are in the direction of speech understanding; for the inverse direction, speech synthesis, the potential of representation learning is yet to be realized, due to the challenging nature of generating high-quality speech. To address this problem, we propose our framework, Alignment-Aware Acoustic-Text Pretraining (A³T), which reconstructs masked acoustic signals with text input and acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied directly to speech editing and unseen-speaker TTS. Experiments show A³T outperforms SOTA models on speech editing and improves multi-speaker speech synthesis without an external speaker verification model.
Soft actuators have shown great advantages in compliance and morphology for manipulating delicate objects and for inspection in confined spaces. There is an unmet need for a soft actuator that can provide torsional motion to enlarge the workspace and increase degrees of freedom. Toward this goal, we present origami-inspired soft pneumatic actuators (OSPAs) made from silicone. A prototype can output more than one full revolution of rotation (up to 435°), larger than its counterparts. We describe the design and fabrication method, build kinematics and simulation models, and analyze and optimize the parameters. Finally, we demonstrate the utility of the OSPAs through three applications: integration into a gripper capable of simultaneously grasping and lifting fragile or flat objects, a versatile robot arm that picks and places items at the right angle with the twisting actuators, and a soft snake robot that can change its attitude and direction through the torsion of the twisting actuators.
In surveillance and search-and-rescue applications, it is important to perform multi-object tracking (MOT) in real time on low-end devices. Today's MOT solutions employ deep neural networks, which tend to have high computation complexity. Recognizing the effects of frame size on tracking performance, we propose DeepScale, a model-agnostic frame size selection approach that operates on top of existing fully convolutional network-based trackers to accelerate tracking throughput. In the training stage, we incorporate detectability scores into a one-shot tracker architecture so that DeepScale learns representation estimations for different frame sizes in a self-supervised manner. During inference, it can adapt frame sizes according to the complexity of the visual content, based on user-controlled parameters. To exploit computing resources on edge servers, we propose two computation partition modes for MOT, namely edge-server-only with adaptive frame-size transmission, and edge-server-assisted tracking. Extensive experiments and benchmark tests on MOT datasets demonstrate the effectiveness and flexibility of DeepScale. Compared to a state-of-the-art tracker, DeepScale++, a variant of DeepScale, achieves a 1.57x speedup with only a moderate loss in tracking accuracy on the MOT15 dataset in one configuration. We have implemented and evaluated DeepScale++ and the proposed computation partition schemes on a small-scale testbed consisting of an NVIDIA Jetson TX2 board and a GPU server. The experiments reveal non-trivial trade-offs between tracking performance and latency compared to server-only or smart-camera-only solutions.
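The inference-time idea can be caricatured as picking the cheapest frame size whose predicted detectability clears a user-controlled target. The candidate sizes and scores below are made up, and DeepScale's learned detectability estimator is not reproduced; this only shows the selection policy and its accuracy/throughput knob.

```python
def select_frame_size(detectability, sizes, target=0.8):
    """Pick the smallest frame size whose predicted detectability meets the
    user-controlled target; fall back to the largest size otherwise."""
    for size in sorted(sizes):
        if detectability.get(size, 0.0) >= target:
            return size
    return max(sizes)

# Hypothetical per-size detectability scores predicted for the current frame.
scores = {480: 0.55, 720: 0.82, 1080: 0.95}

relaxed = select_frame_size(scores, scores.keys(), target=0.8)
strict = select_frame_size(scores, scores.keys(), target=0.9)
```

Raising the target trades throughput for accuracy (larger frames are chosen more often); in the edge-server-only mode, the chosen size would also determine how much data the camera transmits per frame.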
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or can only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification, using the ViT-Tiny, ViT-Small, and ViT-Base models, with +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
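Finding 1), distilling token relations rather than individual features, can be sketched as matching the teacher's and student's token-to-token similarity distributions. The row-softmax similarity and mean row-wise KL below are one plausible instantiation under stated assumptions, not necessarily the exact relation target or loss used in the paper.

```python
import numpy as np

def relation_matrix(tokens):
    """Row-normalized token-to-token similarity: a softmax over the scaled
    Gram matrix of the token features (the 'relations' being distilled)."""
    sim = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    sim -= sim.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(sim)
    return e / e.sum(axis=1, keepdims=True)

def relation_distill_loss(teacher_tokens, student_tokens):
    """Mean row-wise KL divergence between the two relation matrices."""
    P = relation_matrix(teacher_tokens)
    Q = relation_matrix(student_tokens)
    return float(np.mean(np.sum(P * np.log(P / Q), axis=1)))

rng = np.random.default_rng(0)
teacher = rng.standard_normal((16, 32))         # 16 tokens, 32-dim features
student = teacher + 0.1 * rng.standard_normal((16, 32))
loss = relation_distill_loss(teacher, student)
perfect = relation_distill_loss(teacher, teacher)
```

Because only the pairwise relations are matched, the student's feature dimensionality need not equal the teacher's in a real setup (each would get its own projection), which is part of what makes relation targets convenient for mismatched model sizes.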